Conversation

⚠️ Warning: Rate limit exceeded
Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 56 minutes and 46 seconds.

⌛ How to resolve this issue?
After the wait time has elapsed, a review can be triggered again. We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?
CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration: Organization UI · Review profile: CHILL · Plan: Pro
⛔ Files ignored due to path filters (1)
📒 Files selected for processing (45)

📝 Walkthrough
This PR refactors logging by migrating from Zap to Go's structured logging package (slog).

🚥 Pre-merge checks: ✅ 3 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (3 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 4
🧹 Nitpick comments (1)
internal/store/postgres/postgres.go (1)
41-42: Prefer returning startup errors over panicking from the store layer.
This preserves structured logs and avoids an extra unstructured panic dump on stderr. If changing NewStore's signature is too large for this PR, consider tracking it as a follow-up.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/store/postgres/postgres.go` around lines 41 - 42, The store layer should return initialization errors instead of panicking: change NewStore (the constructor around slog.Error and panic(err)) to return (/*store type*/, error), remove the panic(err) and replace it with returning the error after logging (use slog.Error(...,"error", err)), and update callers of NewStore to handle the returned error and propagate or fail gracefully; ensure all references to panic(err) in postgres.go are removed and replaced with proper error returns.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@go.mod`:
- Line 3: go.mod was bumped to "go 1.25" which requires CI to run Go >=1.21;
update the CI workflow YAMLs that set the go-version (currently "1.20" or
"^1.20") to at least "1.21" or match "1.25" so builds/test use a compatible
toolchain. Locate the CI YAMLs that contain the go-version keys and change them
to a compatible version (e.g., "1.21" or "1.25") or a caret range that includes
1.25, then rerun CI to verify the slog-related build succeeds.
In `@internal/store/postgres/postgres.go`:
- Line 37: The code forces pgx verbosity to Warn by setting
cc.ConnConfig.LogLevel = pgx.LogLevelWarn which prevents Debug/Info DB query
events from reaching the PgxLogger adapter; update NewStore() (or its caller) to
accept the configured log level and map that to cc.ConnConfig.LogLevel (e.g.,
map your slog/config levels to pgx.LogLevelDebug/Info/Warn/Error) so pgx emits
the same verbosity as the application, or alternatively add documentation near
NewStore()/cc.ConnConfig.LogLevel explaining that pgx query logs are
intentionally suppressed below Warn.
- Around line 35-37: ParseConfig’s error is being ignored causing a possible
nil/invalid cc access when setting cc.ConnConfig; change the code around
pgxpool.ParseConfig(conn) so you capture the returned error, check it
immediately, and log/return it before touching cc.ConnConfig or
cc.ConnConfig.Logger (references: pgxpool.ParseConfig, cc, cc.ConnConfig,
logger.PgxLogger, cc.ConnConfig.LogLevel). Ensure the function returns or logs
the ParseConfig error (the same structured error path used later) to avoid panic
and only set ConnConfig.Logger/LogLevel after confirming cc is non-nil and err
== nil.
In `@pkg/logger/pgx.go`:
- Around line 13-17: PgxLogger.Log currently iterates and logs all entries from
the pgx `data` map (including sensitive keys like "args"), so change it to
redact or allowlist before building slog.Attr list: inside PgxLogger.Log inspect
`data` and either (a) use an allowlist of safe keys (e.g., "sql", "command_tag",
"duration", "rows") and only append those to `attrs`, or (b) for any key that is
not explicitly safe (e.g., "args", "password", "params"), append a placeholder
value like "<redacted>" instead of the real value; ensure you still preserve
types for safe fields and that the logic is applied before creating `attrs`.
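The allowlist approach described in the pkg/logger/pgx.go comment can be sketched as a small, self-contained Go helper. This is a minimal sketch, not the repo's actual PgxLogger: the safe key names (sql, commandTag, time, rowCount) are assumptions about what pgx v4 passes to its Logger hook, and `redact` is a hypothetical helper name.

```go
package main

import (
	"fmt"
	"log/slog"
)

// safeKeys is an allowlist of pgx data-map keys considered safe to log
// verbatim. The key names are assumptions about pgx v4's Logger hook;
// anything not listed (notably "args") is replaced with a placeholder.
var safeKeys = map[string]bool{
	"sql": true, "commandTag": true, "time": true, "rowCount": true,
}

// redact builds slog attributes from a pgx-style data map so query
// parameters and credentials never reach the logs.
func redact(data map[string]interface{}) []slog.Attr {
	attrs := make([]slog.Attr, 0, len(data))
	for k, v := range data {
		if safeKeys[k] {
			attrs = append(attrs, slog.Any(k, v))
		} else {
			attrs = append(attrs, slog.String(k, "<redacted>"))
		}
	}
	return attrs
}

func main() {
	attrs := redact(map[string]interface{}{
		"sql":  "SELECT * FROM schemas WHERE id = $1",
		"args": []interface{}{"secret-token"},
	})
	for _, a := range attrs {
		fmt.Println(a.Key, "=", a.Value.String())
	}
}
```

A real adapter would do this inside PgxLogger.Log before passing the attrs to slog; the point is only that redaction happens before the attribute list is built.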
---
Nitpick comments:
In `@internal/store/postgres/postgres.go`:
- Around line 41-42: The store layer should return initialization errors instead
of panicking: change NewStore (the constructor around slog.Error and panic(err))
to return (/*store type*/, error), remove the panic(err) and replace it with
returning the error after logging (use slog.Error(...,"error", err)), and update
callers of NewStore to handle the returned error and propagate or fail
gracefully; ensure all references to panic(err) in postgres.go are removed and
replaced with proper error returns.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7c260f5a-e979-49d8-aa98-527c079b6c39
⛔ Files ignored due to path filters (1)
go.sum is excluded by !**/*.sum
📒 Files selected for processing (9)
config/config.go, formats/json/compatibility.go, formats/json/schema.go, go.mod, internal/middleware/recovery.go, internal/server/server.go, internal/store/postgres/postgres.go, pkg/logger/logger.go, pkg/logger/pgx.go
```diff
 	cc, _ := pgxpool.ParseConfig(conn)
-	cc.ConnConfig.Logger = zapadapter.NewLogger(logger.Logger)
+	cc.ConnConfig.Logger = &logger.PgxLogger{}
+	cc.ConnConfig.LogLevel = pgx.LogLevelWarn
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's locate and read the file
find . -name "postgres.go" -path "*/store/postgres/*" | head -5
```

Repository: raystack/stencil
Length of output: 97

🏁 Script executed:

```shell
# Read the relevant section of the file
cat -n internal/store/postgres/postgres.go | head -50
```

Repository: raystack/stencil
Length of output: 1611

🏁 Script executed:

```shell
# Check pgx library usage and imports
rg "pgxpool|pgx" internal/store/postgres/postgres.go -B 2 -A 2
```

Repository: raystack/stencil
Length of output: 867

🏁 Script executed:

```shell
# Search for any test files or usage examples that show ParseConfig error handling
rg "ParseConfig" --type go -B 2 -A 2
```

Repository: raystack/stencil
Length of output: 431
🌐 Web query:
pgxpool.ParseConfig error handling pgx v4 nil pointer

💡 Result:
In pgx v4, pgxpool.ParseConfig parses a PostgreSQL connection string and returns a *pgxpool.Config or an error. It delegates initial parsing to pgx.ParseConfig (which calls pgconn.ParseConfig), then processes pool-specific parameters like pool_max_conns from RuntimeParams, setting defaults if absent (e.g., MaxConns=4). Errors from invalid parameters (e.g., a non-integer pool_max_conns) are returned as pgconn.ParseConfigError via pgconn.NewParseConfigError. There is no evidence of nil pointer panics directly in ParseConfig itself across source code and issues.

Nil pointer dereferences in pgx v4 pgxpool typically occur elsewhere, such as:
- Passing nil pointers to Scan methods, causing panics in pgtype scanning that previously hung the pool on Close due to unreleased connections (fixed in later v5, but similar behavior possible in v4).
- Using a nil *pgxpool.Pool (e.g., from a failed New/Connect where err is ignored and := shadows the variable).
- Type encoding issues with nil interfaces/pointers in queries (more common in v5).

To handle ParseConfig errors safely:

```go
config, err := pgxpool.ParseConfig(connString)
if err != nil {
	// Handle error, e.g., log and return/fail
	return nil, fmt.Errorf("failed to parse config: %w", err)
}
// Modify config as needed, e.g., config.MaxConns = 10
pool, err := pgxpool.NewWithConfig(ctx, config)
if err != nil {
	return nil, fmt.Errorf("failed to create pool: %w", err)
}
defer pool.Close()
```

Always check err != nil after ParseConfig and NewWithConfig/New/ConnectConfig. Never ignore err or use the pool if err != nil. For query operations, always pass addresses (&var) to Scan to avoid nil pointer panics.

Citations:
- 1: https://pkg.go.dev/github.com/ameerbrar/pgx/v4/pgxpool
- 2: https://pkg.go.dev/github.com/jackc/pgx/v4/pgxpool
- 3: https://github.com/jackc/pgx/blob/master/pgxpool/pool.go
- 4: https://github.com/jackc/pgx/blob/777dec03/pgxpool/pool.go
- 5: https://github.com/jackc/pgx/blob/v4.18.3/conn.go

🏁 Script executed:

```shell
# Check if there are any other ParseConfig calls in the codebase that handle errors correctly
rg "ParseConfig" --type go -A 5 | grep -A 3 "err"
```

Repository: raystack/stencil
Length of output: 213
Handle ParseConfig errors before touching ConnConfig.
If the connection string is invalid, ParseConfig returns an error that is silently ignored with _. Accessing cc.ConnConfig on line 36 when cc is nil will panic before the intended structured error log on line 41.
Proposed fix

```diff
 func NewStore(conn string) *DB {
-	cc, _ := pgxpool.ParseConfig(conn)
+	cc, err := pgxpool.ParseConfig(conn)
+	if err != nil {
+		slog.Error("failed to parse database config", "error", err)
+		panic(err)
+	}
 	cc.ConnConfig.Logger = &logger.PgxLogger{}
 	cc.ConnConfig.LogLevel = pgx.LogLevelWarn
 	pgxPool, err := pgxpool.ConnectConfig(context.Background(), cc)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-	cc, _ := pgxpool.ParseConfig(conn)
-	cc.ConnConfig.Logger = zapadapter.NewLogger(logger.Logger)
-	cc.ConnConfig.Logger = &logger.PgxLogger{}
-	cc.ConnConfig.LogLevel = pgx.LogLevelWarn
+func NewStore(conn string) *DB {
+	cc, err := pgxpool.ParseConfig(conn)
+	if err != nil {
+		slog.Error("failed to parse database config", "error", err)
+		panic(err)
+	}
+	cc.ConnConfig.Logger = &logger.PgxLogger{}
+	cc.ConnConfig.LogLevel = pgx.LogLevelWarn
+	pgxPool, err := pgxpool.ConnectConfig(context.Background(), cc)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/store/postgres/postgres.go` around lines 35 - 37, ParseConfig’s
error is being ignored causing a possible nil/invalid cc access when setting
cc.ConnConfig; change the code around pgxpool.ParseConfig(conn) so you capture
the returned error, check it immediately, and log/return it before touching
cc.ConnConfig or cc.ConnConfig.Logger (references: pgxpool.ParseConfig, cc,
cc.ConnConfig, logger.PgxLogger, cc.ConnConfig.LogLevel). Ensure the function
returns or logs the ParseConfig error (the same structured error path used
later) to avoid panic and only set ConnConfig.Logger/LogLevel after confirming
cc is non-nil and err == nil.
Coverage Report for CI Build 24734032200
Warning: No base build found for commit.
Coverage: 17.686%
Details:
- Uncovered Changes
- Coverage Regressions: requires a base build to compare against. How to fix this →
- Coverage Stats
💛 - Coveralls
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
formats/protobuf/provider_test.go (1)
58-62: ⚠️ Potential issue | 🟡 Minor: Unchecked error from rand.Read.
crypto/rand.Read returns an error that is being discarded. While Go 1.24+ documents that it never returns a non-nil error (panicking instead on failure), errcheck/linters may still flag this, and being explicit is safer for forward compatibility. Consider handling it:

Proposed fix

```diff
 func getRandomName() string {
 	b := make([]byte, 10)
-	rand.Read(b)
+	if _, err := rand.Read(b); err != nil {
+		panic(err)
+	}
 	return hex.EncodeToString(b)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@formats/protobuf/provider_test.go` around lines 58 - 62, The getRandomName function currently ignores the error returned by crypto/rand.Read; update getRandomName to capture and handle that error (e.g., check _, err := rand.Read(b) and if err != nil either return a deterministic fallback or fail fast with a panic/log.Fatalf) so linters won't flag the unchecked error; modify the function named getRandomName to perform the err check and handle the error path accordingly.

formats/protobuf/compatibility.go (1)
113-210: ⚠️ Potential issue | 🟡 Minor: Unrelated change in a logging-migration PR.
This file tightens reserved-range inclusivity checks (AND → OR) in compareMessages (line 118) and compareEnums (line 175), which is unrelated to the stated objective of migrating from zap to slog. Mixing behavioral changes to compatibility semantics into a logging refactor PR makes review/bisect harder and risks regressions slipping past reviewers focused on logging. Consider splitting this into its own PR with a dedicated description and, ideally, test cases that specifically exercise the new "either endpoint missing" path (the existing test fixtures in compatibility_test.go appear to already satisfy both the old AND and the new OR condition, so the stricter semantics may not be explicitly covered).

Also note the PR description states tests pass "except pre-existing formats/protobuf": please confirm whether that failure is actually pre-existing or was introduced/affected by this semantic change.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@formats/protobuf/compatibility.go` around lines 113 - 210, Revert the tightened reserved-range check back to the original "both endpoints must be present" semantics in compareMessages and compareEnums: in compareMessages change the condition that checks current.ReservedRanges() for start and end-1 to require both Has(start) AND Has(end-1) (restore && instead of ||), and in compareEnums require both Has(start) AND Has(end) for non-inclusive detection (restore &&); keep the special-case handling when start==end unchanged and ensure the nonInclusivereservedRange diagnostic is used as before. Also remove this behavioral change from the logging-migration PR and split it into its own PR with focused tests exercising the endpoint-missing paths so compatibility_test.go covers the new semantics.

.github/workflows/test-server.yaml (1)
22-36: ⚠️ Potential issue | 🟠 Major: Update docker-compose.yml to postgres:15 to match the test workflow.
The test workflow now uses postgres:15, but docker-compose.yml still specifies postgres:13. This causes CI/local developer divergence.
SQL migrations use only generic, backwards-compatible syntax (standard CREATE TABLE, JSONB, GIN indexes) — no postgres 13-specific features found, so the version bump is safe. pgx v4.18.3 (pinned in go.mod) is compatible with postgres 15 per the official upstream support policy.
Update docker-compose.yml line 4 from postgres:13 to postgres:15.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/test-server.yaml around lines 22 - 36, Change the postgres service image in docker-compose from postgres:13 to postgres:15 to match the test workflow; locate the docker-compose postgres service (the "postgres" service block and its "image" key) and update the image value from "postgres:13" to "postgres:15". Ensure no other references to postgres:13 remain in the compose file.
♻️ Duplicate comments (1)
internal/store/postgres/postgres.go (1)
35-37: ⚠️ Potential issue | 🔴 Critical: Check ParseConfig before dereferencing the config.
pgxpool.ParseConfig still discards err; an invalid DSN can leave cc unusable before the structured connect log runs. Line 37 also forces pgx to Warn regardless of app log level, so either keep that suppression documented here or wire it to config if DB debug logs are expected.

🐛 Proposed fix

```diff
 func NewStore(conn string) *DB {
-	cc, _ := pgxpool.ParseConfig(conn)
+	cc, err := pgxpool.ParseConfig(conn)
+	if err != nil {
+		slog.Error("failed to parse database config", "error", err)
+		panic(err)
+	}
 	cc.ConnConfig.Logger = &logger.PgxLogger{}
+	// Keep pgx at warn to suppress noisy query logs regardless of app log level.
 	cc.ConnConfig.LogLevel = pgx.LogLevelWarn
 	pgxPool, err := pgxpool.ConnectConfig(context.Background(), cc)
```

Verification:

```shell
#!/bin/bash
# Description: Inspect pgx config parsing and log-level wiring.
# Expect: ParseConfig errors are checked before ConnConfig is used, and pgx log-level policy is documented or config-driven.
rg -n -C3 '\bpgxpool\.ParseConfig\(|ConnConfig\.Logger|ConnConfig\.LogLevel|LogLevel' --type go
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/store/postgres/postgres.go` around lines 35 - 37, Check the error returned by pgxpool.ParseConfig before using the returned config (avoid dereferencing cc when ParseConfig returns an error) and handle or return that error from the function; locate the pgxpool.ParseConfig call and replace the unchecked pattern that then sets cc.ConnConfig.Logger and cc.ConnConfig.LogLevel so you only set cc.ConnConfig.Logger = &logger.PgxLogger{} and cc.ConnConfig.LogLevel when cc is valid. Also make the pgx log level configurable (wire ConnConfig.LogLevel to your app DB debug setting) or add a clear comment documenting why LogLevel is forced to pgx.LogLevelWarn (adjust the code paths that set ConnConfig.LogLevel to use the app config variable instead of hardcoding).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/release-server.yml:
- Around line 5-6: The tags filter in release-server.yml has been broadened to
"- "v*"" which will trigger the workflow for any tag starting with "v"; to
restrict releases to semver tags change that value back to a semver pattern such
as "- "v*.*.*"" or use a stricter regex-like pattern (e.g.
"v[0-9]+.[0-9]+.[0-9]+(-.*)?") in the tags: field; if the wider matching is
intentional for pre-release/short tags, leave "v*" as-is, otherwise replace the
current "v*" entry in the tags: list with one of the semver patterns ("v*.*.*"
or the regex form).
In @.github/workflows/test-server.yaml:
- Around line 53-58: The "Install goveralls and send coverage" step currently
uses a floating go install (`@latest`) and runs only on successful jobs; change
the go install invocation to pin goveralls to a fixed released tag (replace
github.com/mattn/goveralls@latest with
github.com/mattn/goveralls@<specific-version>) so CI is reproducible, and add
if: always() to the step so coverage upload runs even when tests fail (leave
COVERALLS_TOKEN and -service=github as-is).
In `@core/schema/service.go`:
- Around line 127-134: The code calls s.repo.ListVersions and ignores its error,
then proceeds to delete storage and remove cache entries; if ListVersions fails
but Delete succeeds this leaves stale cache keys. Change the flow in the Delete
handler (where s.repo.ListVersions, s.repo.Delete, s.cache.Del and schemaKeyFunc
are used) to first call s.repo.ListVersions and if it returns an error return
that error immediately; only when ListVersions succeeds proceed to call
s.repo.Delete and, on successful delete, iterate versions and call
s.cache.Del(schemaKeyFunc(...)) to evict keys.
In `@formats/protobuf/compatibility.go`:
- Line 118: The check using current.ReservedRanges().Has(start) and Has(end-1)
only verifies endpoints and can miss interior gaps; update the logic in the
field-range validation (the block using variables current, start, end and
calling ReservedRanges().Has and dealing with protoreflect.FieldRange) to ensure
full coverage of [start,end) by either iterating i from start to end-1 and
asserting current.ReservedRanges().Has(i) for each i, or by checking that a
single existing current range fully contains start and end-1; apply the
analogous fix for enum reserved ranges (the code handling protoreflect.EnumRange
/ inclusive [start,end] at the other check) so the entire integer span is
validated rather than only endpoints.
In `@internal/api/api.go`:
- Around line 115-120: The handler currently calls
w.WriteHeader(http.StatusCreated) before json.NewEncoder(w).Encode(sc), so if
encoding fails you cannot change the committed status; fix by encoding to a
temporary buffer (e.g., bytes.Buffer) first or by encoding before calling
w.WriteHeader, then only call w.WriteHeader and copy the buffer to w on success;
if encoding fails log the error via slog (do not call http.Error after the
response is committed) and return. Apply the same pattern in the writeError
path: do not call http.Error after WriteHeader — instead prepare the error body
before committing the status and log any encoding/writing errors with slog.
In `@internal/store/postgres/postgres_test.go`:
- Line 20: The teardown currently ignores the result of m.Down(), which can
leave the test DB in a dirty state; update the test to check m.Down()'s returned
error and fail the test if it is non-nil (e.g., call t.Fatalf or require.NoError
with the error) so teardown errors are surfaced and tests do not continue
silently with a bad DB state; locate the call to m.Down() in the test (the
m.Down() invocation) and replace the silent discard with an explicit error
assertion.
In `@internal/store/postgres/schema_repository.go`:
- Around line 88-99: The delete should be combined with the orphan cleanup
inside a single DB transaction so both succeed or both roll back: replace the
separate r.db.Exec calls for deleteSchemaQuery and deleteOrphanedData in the
method that currently checks ct.RowsAffected() with a r.db.BeginFunc transaction
block, perform the DELETE (deleteSchemaQuery) and ensure rowsAffected > 0
(return store.NoRowsErr via wrapError if zero) then run the orphan cleanup
(deleteOrphanedData) inside the same transaction; propagate any error so the
transaction rolls back and cache invalidation logic in core/schema/service.go
only runs when both ops committed. Ensure this change is applied similarly to
the other occurrence around lines 109-120.
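The all-or-nothing semantics the schema_repository.go comment asks for can be illustrated without a database. The sketch below is a toy model, not pgx's API: `fakeTx`, `withTx`, and `deleteSchema` are hypothetical names standing in for r.db.BeginFunc and the two Exec calls, showing why the orphan cleanup must roll back together with the delete.

```go
package main

import (
	"errors"
	"fmt"
)

// fakeTx is a toy stand-in for a DB transaction: operations accumulate
// in a scratch buffer that is only merged into committed state on success.
type fakeTx struct {
	applied []string
}

func (t *fakeTx) exec(op string, fail bool) error {
	if fail {
		return errors.New(op + " failed")
	}
	t.applied = append(t.applied, op)
	return nil
}

// withTx mimics the shape of pgx's BeginFunc: run fn, and only merge the
// transaction's work into committed state when fn returns nil.
func withTx(state *[]string, fn func(tx *fakeTx) error) error {
	tx := &fakeTx{}
	if err := fn(tx); err != nil {
		return err // rollback: tx.applied is discarded
	}
	*state = append(*state, tx.applied...) // commit
	return nil
}

// deleteSchema runs the schema delete and the orphan cleanup in one
// transaction, so a failed cleanup leaves no half-applied delete behind.
func deleteSchema(state *[]string, orphanFails bool) error {
	return withTx(state, func(tx *fakeTx) error {
		if err := tx.exec("DELETE schema", false); err != nil {
			return err
		}
		return tx.exec("DELETE orphans", orphanFails)
	})
}

func main() {
	var committed []string
	fmt.Println(deleteSchema(&committed, false), committed)
	committed = nil
	fmt.Println(deleteSchema(&committed, true), committed)
}
```

With two separate r.db.Exec calls there is no such shared rollback point, which is exactly the stale-cache / orphaned-data window the comment describes.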
---
Outside diff comments:
In @.github/workflows/test-server.yaml:
- Around line 22-36: Change the postgres service image in docker-compose from
postgres:13 to postgres:15 to match the test workflow; locate the docker-compose
postgres service (the "postgres" service block and its "image" key) and update
the image value from "postgres:13" to "postgres:15". Ensure no other references
to postgres:13 remain in the compose file.
In `@formats/protobuf/compatibility.go`:
- Around line 113-210: Revert the tightened reserved-range check back to the
original "both endpoints must be present" semantics in compareMessages and
compareEnums: in compareMessages (function compareMessages) change the condition
that checks current.ReservedRanges() for start and end-1 to require both
Has(start) AND Has(end-1) (restore && instead of ||), and in compareEnums
(function compareEnums) require both Has(start) AND Has(end) for non-inclusive
detection (restore &&); keep the special-case handling when start==end unchanged
and ensure the nonInclusivereservedRange diagnostic is used as before. Also
remove this behavioral change from the logging-migration PR and split it into
its own PR with focused tests exercising the endpoint-missing paths so
compatibility_test.go covers the new semantics.
In `@formats/protobuf/provider_test.go`:
- Around line 58-62: The getRandomName function currently ignores the error
returned by crypto/rand.Read; update getRandomName to capture and handle that
error (e.g., check err := rand.Read(b) and if err != nil either return a
deterministic fallback or fail fast with a panic/log.Fatalf) so linters won’t
flag the unchecked error; modify the function named getRandomName to perform the
err check and handle the error path accordingly.
---
Duplicate comments:
In `@internal/store/postgres/postgres.go`:
- Around line 35-37: Check the error returned by pgxpool.ParseConfig before
using the returned config (avoid dereferencing cc when ParseConfig returns an
error) and handle or return that error from the function; locate the
pgxpool.ParseConfig call and replace the unchecked pattern that then sets
cc.ConnConfig.Logger and cc.ConnConfig.LogLevel so you only set
cc.ConnConfig.Logger = &logger.PgxLogger{} and cc.ConnConfig.LogLevel when cc is
valid. Also make the pgx log level configurable (wire ConnConfig.LogLevel to
your app DB debug setting) or add a clear comment documenting why LogLevel is
forced to pgx.LogLevelWarn (adjust the code paths that set ConnConfig.LogLevel
to use the app config variable instead of hardcoding).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: d264e4ff-4205-4118-b44d-45cbfa954317
📒 Files selected for processing (38)
.github/workflows/docs.yml, .github/workflows/lint.yml, .github/workflows/release-server.yml, .github/workflows/test-server.yaml, .golangci.yaml, cmd/check.go, cmd/create.go, cmd/delete.go, cmd/diff.go, cmd/download.go, cmd/edit.go, cmd/graph.go, cmd/info.go, cmd/list.go, cmd/namespace.go, cmd/print.go, config/build.go, config/load.go, core/namespace/namespace.go, core/namespace/service.go, core/schema/compatibility.go, core/schema/mocks/schema_cache.go, core/schema/schema.go, core/schema/service.go, core/search/search.go, core/search/service.go, formats/json/utils.go, formats/protobuf/compatibility.go, formats/protobuf/provider_test.go, internal/api/api.go, internal/store/errors.go, internal/store/postgres/namespace_repository.go, internal/store/postgres/postgres.go, internal/store/postgres/postgres_test.go, internal/store/postgres/schema_repository.go, pkg/graph/graph.go, pkg/logger/pgx.go, ui/embed.go
✅ Files skipped from review due to trivial changes (18)
- internal/store/errors.go
- cmd/create.go
- cmd/info.go
- cmd/edit.go
- pkg/graph/graph.go
- ui/embed.go
- config/build.go
- cmd/download.go
- config/load.go
- core/search/search.go
- cmd/check.go
- cmd/delete.go
- core/schema/compatibility.go
- core/namespace/namespace.go
- core/namespace/service.go
- core/search/service.go
- formats/json/utils.go
- cmd/list.go
🚧 Files skipped from review as they are similar to previous changes (1)
- pkg/logger/pgx.go
```diff
 tags:
-  - "v*.*.*"
+  - "v*"
```
Tag filter widened from semver to any v* tag.
Changing v*.*.* → v* means any tag starting with v (e.g. v-test, valpha, v1, v2-rc) will now trigger the release + Docker publish workflow. If that's intentional to support pre-release / short tags, ignore; otherwise consider restricting to semver-shaped tags, e.g.:
```diff
-  - "v*"
+  - "v[0-9]+.[0-9]+.[0-9]+"
+  - "v[0-9]+.[0-9]+.[0-9]+-*"
```

🤖 Prompt for AI Agents
+ - "v[0-9]+.[0-9]+.[0-9]+-*"🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/release-server.yml around lines 5 - 6, The tags filter in
release-server.yml has been broadened to "- "v*"" which will trigger the
workflow for any tag starting with "v"; to restrict releases to semver tags
change that value back to a semver pattern such as "- "v*.*.*"" or use a
stricter regex-like pattern (e.g. "v[0-9]+.[0-9]+.[0-9]+(-.*)?") in the tags:
field; if the wider matching is intentional for pre-release/short tags, leave
"v*" as-is, otherwise replace the current "v*" entry in the tags: list with one
of the semver patterns ("v*.*.*" or the regex form).
```yaml
- name: Install goveralls and send coverage
  env:
    COVERALLS_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: |
    go install github.com/mattn/goveralls@latest
    goveralls -coverprofile=coverage.out -service=github
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify that `make test` produces coverage.out at the repo root and check the test target definition.
fd -HI -t f '^Makefile$' --max-depth 3
rg -nP -C2 '^(test|coverage)\s*:' Makefile 2>/dev/null || true
rg -n 'coverage\.out|-coverprofile' Makefile 2>/dev/null || trueRepository: raystack/stencil
Length of output: 518
🏁 Script executed:
#!/bin/bash
# Check if v0.0.12 is a valid version and look for available versions of goveralls
curl -s https://api.github.com/repos/mattn/goveralls/releases | jq -r '.[0:5] | .[] | "\(.tag_name)"' 2>/dev/null || echo "Unable to fetch releases"Repository: raystack/stencil
Length of output: 84
Harden the coverage-upload step.
Two concrete issues worth addressing:
- go install github.com/mattn/goveralls@latest is a floating pin; pin to a specific version so CI behavior is reproducible if upstream ships a breaking change.
- Because there's no if: always(), a single test failure prevents coverage upload entirely, and coverage trends are lost on flaky tests. Consider if: always() to capture coverage even when tests fail (or accept this intentionally).

The assumption that make test produces coverage.out is correct: the test target in the Makefile explicitly uses -coverprofile=coverage.out. Using secrets.GITHUB_TOKEN as COVERALLS_TOKEN with -service=github is also the documented pattern, so that part is fine.
♻️ Suggested change

```diff
 - name: Install goveralls and send coverage
+  if: always()
   env:
     COVERALLS_TOKEN: ${{ secrets.GITHUB_TOKEN }}
   run: |
-    go install github.com/mattn/goveralls@latest
+    go install github.com/mattn/goveralls@v0.0.12
     goveralls -coverprofile=coverage.out -service=github
```
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/test-server.yaml around lines 53 - 58, The "Install
goveralls and send coverage" step currently uses a floating go install (`@latest`)
and runs only on successful jobs; change the go install invocation to pin
goveralls to a fixed released tag (replace github.com/mattn/goveralls@latest
with github.com/mattn/goveralls@<specific-version>) so CI is reproducible, and
add if: always() to the step so coverage upload runs even when tests fail (leave
COVERALLS_TOKEN and -service=github as-is).
```diff
 	prevRange := prevRanges.Get(i)
 	start, end := prevRange[0], prevRange[1]
-	if !(current.ReservedRanges().Has(start) && current.ReservedRanges().Has(end-1)) {
+	if !current.ReservedRanges().Has(start) || !current.ReservedRanges().Has(end-1) {
```
Endpoint-only check still allows gaps in the middle of the range.
Switching from AND to OR makes the check stricter and is a reasonable improvement, but checking only start and end-1 can still miss a case where the endpoints exist in current but an interior number (e.g., start+1) is no longer reserved — that range would incorrectly be treated as inclusive. If the intent is to ensure the previous reserved range is fully preserved, iterate over [start, end) (messages are half-open per protoreflect.FieldRange) and assert each number is present, or verify that a single current range fully covers [start, end).
Same consideration applies at line 175 for enums (where ranges are inclusive [start, end] per protoreflect.EnumRange).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@formats/protobuf/compatibility.go` at line 118, The check using
current.ReservedRanges().Has(start) and Has(end-1) only verifies endpoints and
can miss interior gaps; update the logic in the field-range validation (the
block using variables current, start, end and calling ReservedRanges().Has and
dealing with protoreflect.FieldRange) to ensure full coverage of [start,end) by
either iterating i from start to end-1 and asserting
current.ReservedRanges().Has(i) for each i, or by checking that a single
existing current range fully contains start and end-1; apply the analogous fix
for enum reserved ranges (the code handling protoreflect.EnumRange / inclusive
[start,end] at the other check) so the entire integer span is validated rather
than only endpoints.
  w.Header().Set("Content-Type", "application/json")
  w.WriteHeader(http.StatusCreated)
- json.NewEncoder(w).Encode(sc)
+ if err := json.NewEncoder(w).Encode(sc); err != nil {
+     http.Error(w, err.Error(), http.StatusInternalServerError)
+ }
  }
http.Error after WriteHeader won't override the status.
w.WriteHeader(http.StatusCreated) at line 116 commits the response headers, so if Encode fails partway through, the subsequent http.Error(w, ..., 500) can't change the status (it'll just log a "superfluous response.WriteHeader" warning and append the error text to a response already returning 201). The same pattern exists in writeError (Line 155-157).
At this point the response is effectively unrecoverable; logging the encoder error (especially now that slog is available) and returning is usually more honest than pretending to send a 500.
Suggested change
- w.Header().Set("Content-Type", "application/json")
- w.WriteHeader(http.StatusCreated)
- if err := json.NewEncoder(w).Encode(sc); err != nil {
- http.Error(w, err.Error(), http.StatusInternalServerError)
- }
+ w.Header().Set("Content-Type", "application/json")
+ w.WriteHeader(http.StatusCreated)
+ if err := json.NewEncoder(w).Encode(sc); err != nil {
+ slog.Error("failed to encode upload schema response", "err", err)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/api/api.go` around lines 115 - 120, The handler currently calls
w.WriteHeader(http.StatusCreated) before json.NewEncoder(w).Encode(sc), so if
encoding fails you cannot change the committed status; fix by encoding to a
temporary buffer (e.g., bytes.Buffer) first or by encoding before calling
w.WriteHeader, then only call w.WriteHeader and copy the buffer to w on success;
if encoding fails log the error via slog (do not call http.Error after the
response is committed) and return. Apply the same pattern in the writeError
path: do not call http.Error after WriteHeader — instead prepare the error body
before committing the status and log any encoding/writing errors with slog.
  ct, err := r.db.Exec(ctx, deleteSchemaQuery, ns, sc)
  if err != nil {
      return wrapError(err, "delete schema")
  }
  if ct.RowsAffected() == 0 {
      return store.NoRowsErr.WithErr(nil, "schema")
  }
  // Idempotent operation to clean orphaned data.
- r.db.Exec(ctx, deleteOrphanedData)
- return wrapError(err, "delete schema")
+ if _, execErr := r.db.Exec(ctx, deleteOrphanedData); execErr != nil {
+     return wrapError(execErr, "delete schema")
+ }
+ return nil
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Description: Inspect schema delete/cache invalidation flow.
# Expect: committed deletes cannot return cleanup errors that prevent cache invalidation.
rg -n -C4 'func \(r \*SchemaRepository\) Delete|func \(r \*SchemaRepository\) DeleteVersion|deleteOrphanedData|BeginFunc|cache\.Del' --type go

Repository: raystack/stencil
Length of output: 6121
Wrap primary delete and orphan cleanup in a single transaction to ensure cache invalidation is consistent with database state.
When deleteOrphanedData fails after the primary DELETE has already committed, the rows are gone from the database but core/schema/service.go skips cache invalidation because the error is returned. This leaves stale cached entries for deleted rows. Use r.db.BeginFunc to make both operations atomic—if cleanup fails, the entire transaction rolls back so the error correctly prevents cache invalidation.
Proposed fix
func (r *SchemaRepository) Delete(ctx context.Context, ns string, sc string) error {
- ct, err := r.db.Exec(ctx, deleteSchemaQuery, ns, sc)
- if err != nil {
- return wrapError(err, "delete schema")
- }
- if ct.RowsAffected() == 0 {
- return store.NoRowsErr.WithErr(nil, "schema")
- }
- // Idempotent operation to clean orphaned data.
- if _, execErr := r.db.Exec(ctx, deleteOrphanedData); execErr != nil {
- return wrapError(execErr, "delete schema")
- }
- return nil
+ return r.db.BeginFunc(ctx, func(tx pgx.Tx) error {
+ ct, err := tx.Exec(ctx, deleteSchemaQuery, ns, sc)
+ if err != nil {
+ return wrapError(err, "delete schema")
+ }
+ if ct.RowsAffected() == 0 {
+ return store.NoRowsErr.WithErr(nil, "schema")
+ }
+ // Idempotent operation to clean orphaned data.
+ if _, execErr := tx.Exec(ctx, deleteOrphanedData); execErr != nil {
+ return wrapError(execErr, "delete schema")
+ }
+ return nil
+ })
}
func (r *SchemaRepository) DeleteVersion(ctx context.Context, ns string, sc string, version int32) error {
- ct, err := r.db.Exec(ctx, deleteVersionQuery, ns, sc, version)
- if err != nil {
- return wrapError(err, "delete version")
- }
- if ct.RowsAffected() == 0 {
- return store.NoRowsErr.WithErr(nil, "version")
- }
- // Idempotent operation to clean orphaned data.
- if _, execErr := r.db.Exec(ctx, deleteOrphanedData); execErr != nil {
- return wrapError(execErr, "delete version")
- }
- return nil
+ return r.db.BeginFunc(ctx, func(tx pgx.Tx) error {
+ ct, err := tx.Exec(ctx, deleteVersionQuery, ns, sc, version)
+ if err != nil {
+ return wrapError(err, "delete version")
+ }
+ if ct.RowsAffected() == 0 {
+ return store.NoRowsErr.WithErr(nil, "version")
+ }
+ // Idempotent operation to clean orphaned data.
+ if _, execErr := tx.Exec(ctx, deleteOrphanedData); execErr != nil {
+ return wrapError(execErr, "delete version")
+ }
+ return nil
+ })
}

Also applies to: 109-120
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/store/postgres/schema_repository.go` around lines 88 - 99, The
delete should be combined with the orphan cleanup inside a single DB transaction
so both succeed or both roll back: replace the separate r.db.Exec calls for
deleteSchemaQuery and deleteOrphanedData in the method that currently checks
ct.RowsAffected() with a r.db.BeginFunc transaction block, perform the DELETE
(deleteSchemaQuery) and ensure rowsAffected > 0 (return store.NoRowsErr via
wrapError if zero) then run the orphan cleanup (deleteOrphanedData) inside the
same transaction; propagate any error so the transaction rolls back and cache
invalidation logic in core/schema/service.go only runs when both ops committed.
Ensure this change is applied similarly to the other occurrence around lines
109-120.
Replace zap logger with Go's standard log/slog throughout the codebase, establishing a single consistent logging approach.

- pkg/logger: replace zap init with slog JSON handler + configurable level
- pkg/logger/pgx.go: new slog adapter for pgx database logger with allowlisted safe keys to avoid logging sensitive bind arguments
- config: add LogLevel field (debug/info/warn/error)
- postgres: use slog adapter with warn-level DB logging
- formats/json: replace zap warns with structured slog warns
- middleware/recovery: use slog instead of stdlib log
- server: use logger.Init() and slog throughout

Closes #178
- Invalidate ristretto cache entries when schemas/versions are deleted, preventing stale data from being served after recreation
- Return not-found error when deleting non-existent namespaces, schemas, or versions by checking RowsAffected
- Check ListVersions error before proceeding with schema delete
- Log encode errors instead of attempting http.Error after WriteHeader
- Add Del method to Cache interface

Fixes #180
Fixes #145
- Upgrade CI: actions/checkout@v6, actions/setup-go@v6, golangci-lint-action@v9, goreleaser-action@v7, setup-protoc@v3
- Use go-version-file instead of hardcoded Go versions
- Add concurrency groups and path filters
- Update golangci-lint config to v2 schema
- Fix all errcheck issues (unchecked MarkFlagRequired, json.Encode, etc.)
- Fix staticcheck issues (deprecated ioutil/rand, redundant nil checks)
- Add comments on exported types
Summary
- Replace go.uber.org/zap with Go's standard log/slog across the entire codebase
- Add LogLevel to server config (debug/info/warn/error)

Changes
pkg/logger/
- logger.go: replaced zap init() with Init(logLevel) that configures slog with JSON handler, source info, and configurable level
- pgx.go: new slog adapter implementing the pgx.Logger interface for database logging

config/config.go
- LogLevel field (default: info)

internal/server/server.go
- logger.Init(cfg.LogLevel) at startup
- replace log.Fatalln/log.Fatalf with slog.Error + os.Exit(1)

internal/store/postgres/postgres.go
- LogLevelWarn to reduce DB query noise

formats/json/
- compatibility.go: replace logger.Logger.Warn(fmt.Sprintf(...)) with slog.Warn(..., "type", schemaTypes)
- schema.go: replace zap warns with structured slog warns including error context

internal/middleware/recovery.go
- replace log.Printf with slog.ErrorContext

Removed dependencies:
go.uber.org/zap, pgx/v4/log/zapadapter

Closes #178
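The allowlisted-keys idea behind pkg/logger/pgx.go can be sketched without depending on pgx itself; safeKeys and filterAttrs below are hypothetical stand-ins for the PR's adapter, operating on a pgx-style data map:

```go
package main

import (
	"log/slog"
	"os"
)

// safeKeys is a hypothetical allowlist: only known-safe pgx log fields
// pass through, so bind arguments ("args") never reach the logs.
var safeKeys = map[string]bool{
	"sql":      true,
	"rowCount": true,
	"time":     true,
	"pid":      true,
}

// filterAttrs keeps only allowlisted keys from a pgx-style data map,
// returning alternating key/value pairs suitable for slog.
func filterAttrs(data map[string]any) []any {
	attrs := make([]any, 0, 2*len(data))
	for k, v := range data {
		if safeKeys[k] {
			attrs = append(attrs, k, v)
		}
	}
	return attrs
}

func main() {
	logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
	data := map[string]any{
		"sql":  "SELECT * FROM schemas WHERE id = $1",
		"args": []any{"secret-id"}, // dropped: not allowlisted
	}
	logger.Warn("query", filterAttrs(data)...)
}
```

Filtering at the adapter boundary means no call site needs to remember which pgx fields are sensitive.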
Test plan
- go build ./... passes
- go test ./... passes (except pre-existing formats/protobuf)
- loglevel: debug: DB queries visible
- loglevel: warn: DB queries suppressed